The Query Complexity of Mastermind with l_p Distances
Consider a variant of the Mastermind game in which queries are answered with l_p distances, rather than the usual Hamming distance. That is, a codemaker chooses a hidden vector y in {-k,-k+1,...,k-1,k}^n and answers each query x in {-k,-k+1,...,k-1,k}^n with the distance ||y-x||_p. The goal is to minimize the number of queries made in order to correctly guess y.
In this work, we show an upper bound of O(min{n,(n log k)/(log n)}) queries for any real p >= 1. We also show that essentially any approximation of this problem is as hard as finding the hidden vector exactly, up to constant factors. Finally, we show that for the noisy version of the problem, i.e., the setting where the codemaker answers queries with any q = (1 +/- epsilon)||y-x||_p, there is no query-efficient algorithm.
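For concreteness, here is a minimal Python sketch of the query model for p = 1, using the naive strategy of one baseline query plus at most two single-coordinate probes per coordinate (at most 2n + 1 queries in total). It only illustrates the oracle interface and is not the sublinear-query algorithm from the paper; the function and oracle names are illustrative.

```python
import numpy as np

def solve_l1_mastermind(query, n, k):
    """Recover the hidden vector y in {-k,...,k}^n using l_1 distance queries.

    `query(x)` returns ||y - x||_1. This naive strategy uses at most 2n + 1
    queries: one baseline query at the origin, then one or two
    single-coordinate probes per coordinate.
    """
    y = np.zeros(n, dtype=int)
    base = query(np.zeros(n, dtype=int))  # ||y||_1
    for i in range(n):
        probe = np.zeros(n, dtype=int)
        probe[i] = k
        diff = query(probe) - base        # |y_i - k| - |y_i|
        if diff < k:                      # only possible when y_i > 0
            y[i] = (k - diff) // 2
        else:                             # y_i <= 0; probe from the other side
            probe[i] = -k
            diff2 = query(probe) - base   # |y_i + k| - |y_i| = k + 2*y_i
            y[i] = (diff2 - k) // 2
    return y

# Example usage with a hidden vector and a counting oracle.
rng = np.random.default_rng(0)
n, k = 8, 5
hidden = rng.integers(-k, k + 1, size=n)
queries = 0

def oracle(x):
    global queries
    queries += 1
    return int(np.abs(hidden - x).sum())

guess = solve_l1_mastermind(oracle, n, k)
assert np.array_equal(guess, hidden)
print(f"recovered {guess.tolist()} in {queries} queries")
```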
Performance of l_1 Regularization for Sparse Convex Optimization
Despite widespread adoption in practice, guarantees for the LASSO and Group
LASSO are strikingly lacking in settings beyond statistical problems, and these
algorithms are usually considered to be heuristics in the context of sparse
convex optimization on deterministic inputs. We give the first recovery
guarantees for the Group LASSO for sparse convex optimization with
vector-valued features. We show that if a sufficiently large Group LASSO
regularization is applied when minimizing a strictly convex function, then
the minimizer is a sparse vector supported on vector-valued features with the
largest norm of the gradient. Thus, repeating this procedure selects
the same set of features as the Orthogonal Matching Pursuit algorithm, which
admits recovery guarantees for any function with restricted strong
convexity and smoothness via weak submodularity arguments. This answers open
questions of Tibshirani et al. and Yasuda et al. Our result is the first to
theoretically explain the empirical success of the Group LASSO for convex
functions under general input instances assuming only restricted strong
convexity and smoothness. Our result also generalizes provable guarantees for
the Sequential Attention algorithm of Yasuda et al., which is a feature
selection algorithm inspired by the attention mechanism.
As an application of our result, we give new results for the column subset
selection problem, which is well-studied when the loss is the Frobenius norm or
other entrywise matrix losses. We give the first result for general loss
functions for this problem that requires only restricted strong convexity and
smoothness.
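To make the selection rule above concrete, here is a small numpy sketch of the greedy procedure for a least-squares objective: refit on the currently selected groups, then add the group whose gradient block has the largest norm, i.e., the group that a sufficiently large Group LASSO penalty would leave nonzero. The least-squares objective, the group structure, and all names are illustrative assumptions rather than the paper's general setting.

```python
import numpy as np

def select_groups(X, y, groups, t):
    """Greedy group selection for f(w) = 0.5 * ||X w - y||^2.

    `groups` is a list of column-index arrays (the vector-valued features).
    At each step we refit on the selected groups and add the group whose
    gradient block has the largest Euclidean norm, mirroring what a large
    Group LASSO penalty selects; repeated t times this is group OMP.
    """
    n, d = X.shape
    selected = []
    for _ in range(t):
        # Refit restricted to the currently selected groups.
        cols = (np.concatenate([groups[g] for g in selected])
                if selected else np.array([], dtype=int))
        w = np.zeros(d)
        if cols.size:
            w[cols], *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        grad = X.T @ (X @ w - y)  # gradient of f at the restricted minimizer
        # Pick the unselected group with the largest gradient-block norm.
        scores = [np.linalg.norm(grad[groups[g]]) if g not in selected else -np.inf
                  for g in range(len(groups))]
        selected.append(int(np.argmax(scores)))
    return selected

# Example: 5 groups of 3 features each; only groups 1 and 3 matter.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 15))
groups = [np.arange(3 * g, 3 * g + 3) for g in range(5)]
w_true = np.zeros(15)
w_true[groups[1]] = 1.0
w_true[groups[3]] = -2.0
y = X @ w_true + 0.01 * rng.standard_normal(100)
print(select_groups(X, y, groups, t=2))  # expected: groups 3 and 1 (in some order)
```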
Sharper Bounds for Sensitivity Sampling
In large scale machine learning, random sampling is a popular way to
approximate datasets by a small representative subset of examples. In
particular, sensitivity sampling is an intensely studied technique which
provides provable guarantees on the quality of approximation, while reducing
the number of examples to the product of the VC dimension and the total
sensitivity in remarkably general settings. However, guarantees
going beyond this general bound are known in perhaps only
one setting, that of l_2 subspace embeddings, despite intense study of
sensitivity sampling in prior work. In this work, we show the first bounds for
sensitivity sampling for l_p subspace embeddings, for p other than 2, that
improve over the general bound, covering both the p < 2 and p > 2 regimes.
For p < 2, we show that this bound is tight, in the sense that there
exist matrices for which that many samples are necessary. Furthermore,
our techniques yield further new results in the study of sampling algorithms,
showing new bounds for the root leverage score sampling algorithm for p < 2,
as well as an improved bound for a combination of leverage score and
sensitivity sampling for p > 2. Our sensitivity sampling results yield the best
known sample complexity for a wide class of structured matrices that have small
sensitivity.
Comment: To appear in ICML 2023.
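For intuition, the sketch below shows generic sensitivity sampling in the special case of l_2 subspace embeddings, where the sensitivities coincide with the leverage scores: each row is kept with probability proportional to its sensitivity and reweighted. This is the standard scheme that the paper analyzes more sharply, not the sharper sampling bounds themselves, and the oversampling parameter is an arbitrary illustrative choice.

```python
import numpy as np

def sensitivity_sample(A, oversample=10.0, seed=0):
    """Generic sensitivity sampling sketch for an l_2 subspace embedding.

    For the l_2 norm, the sensitivity of row a_i equals its leverage score
    tau_i = a_i^T (A^T A)^+ a_i. Each row is kept independently with
    probability p_i = min(1, oversample * tau_i) and reweighted by
    1 / sqrt(p_i), so that ||SAx||_2 approximates ||Ax||_2 for all x.
    """
    rng = np.random.default_rng(seed)
    # Leverage scores via a thin QR factorization: tau_i = ||q_i||_2^2.
    Q, _ = np.linalg.qr(A, mode="reduced")
    tau = (Q ** 2).sum(axis=1)
    probs = np.minimum(1.0, oversample * tau)
    keep = rng.random(A.shape[0]) < probs
    # Reweight kept rows so squared norms are unbiased.
    return A[keep] / np.sqrt(probs[keep])[:, None], keep

# Example: compare ||SAx||_2 to ||Ax||_2 on a random direction.
rng = np.random.default_rng(1)
A = rng.standard_normal((5000, 20))
SA, keep = sensitivity_sample(A, oversample=8.0)
x = rng.standard_normal(20)
print(keep.sum(), "rows kept;",
      "||SAx|| / ||Ax|| =", np.linalg.norm(SA @ x) / np.linalg.norm(A @ x))
```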
High-Dimensional Geometric Streaming in Polynomial Space
Many existing algorithms for streaming geometric data analysis have been
plagued by exponential dependencies in the space complexity, which are
undesirable for processing high-dimensional data sets. In particular, once
the dimension d exceeds log n, there are no known non-trivial streaming
algorithms for problems such as maintaining convex hulls and L\"owner-John ellipsoids of n points,
despite a long line of work in streaming computational geometry since [AHV04].
We simultaneously improve these results to poly(d, log n) bits of
space by trading off with a poly(d, log n) factor distortion. We
achieve these results in a unified manner, by designing the first streaming
algorithm for maintaining a coreset for l_∞ subspace embeddings with
poly(d, log n) space and distortion. Our
algorithm also gives similar guarantees in the \emph{online coreset} model.
Along the way, we sharpen results for online numerical linear algebra by
replacing a log condition number dependence with a log n dependence,
answering a question of [BDM+20]. Our techniques provide a novel connection
between leverage scores, a fundamental object in numerical linear algebra, and
computational geometry.
For l_p subspace embeddings, we give nearly optimal trade-offs between
space and distortion for one-pass streaming algorithms. For instance, we give a
deterministic coreset whose space and distortion are both polynomial in d and
log n, whereas previous deterministic algorithms incurred a polynomial-in-n
factor in the space or the distortion [CDW18].
Our techniques have implications in the offline setting, where we give
optimal trade-offs between the space complexity and distortion of subspace
sketch data structures. To do this, we give an elementary proof of a "change of
density" theorem of [LT80] and make it algorithmic.Comment: Abstract shortened to meet arXiv limits; v2 fix statements concerning
online condition numbe
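The connection between leverage scores and online coresets can be illustrated with a toy l_2 version of the idea: keep an incoming row only if its ridge leverage score with respect to the rows kept so far is large. This is a simplified stand-in for the paper's l_∞ construction; the threshold and ridge parameter are illustrative choices.

```python
import numpy as np

def online_coreset(rows, threshold=0.5, ridge=1e-8):
    """Toy online coreset for l_2 via online ridge leverage scores.

    A row a is kept if a^T (M + ridge*I)^{-1} a >= threshold, where M is the
    Gram matrix of the rows kept so far. A large score means a is poorly
    represented by the current coreset, so it must be retained; a small score
    means the coreset already covers its direction.
    """
    d = rows.shape[1]
    M = np.zeros((d, d))
    kept = []
    for i, a in enumerate(rows):
        score = a @ np.linalg.solve(M + ridge * np.eye(d), a)
        if score >= threshold:
            kept.append(i)
            M += np.outer(a, a)
    return np.array(kept)

# Example: a stream of 10,000 rows in 15 dimensions keeps only a small subset.
rng = np.random.default_rng(2)
stream = rng.standard_normal((10000, 15))
idx = online_coreset(stream)
print(f"kept {len(idx)} of {len(stream)} rows")
```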
New Subset Selection Algorithms for Low Rank Approximation: Offline and Online
Subset selection for the rank-k approximation of a matrix
offers improvements in the interpretability of matrices, as well as a variety
of computational savings. This problem is well-understood when the error
measure is the Frobenius norm, with various tight algorithms known even in
challenging models such as the online model, where an algorithm must select the
column subset irrevocably when the columns arrive one by one. In contrast, for
other matrix losses, optimal trade-offs between the subset size and
approximation quality have not been settled, even in the offline setting. We
give a number of results towards closing these gaps.
In the offline setting, we achieve nearly optimal bicriteria algorithms in
two settings. First, we remove an extraneous factor from a result of [SWZ19] when
the loss function is any entrywise loss with an approximate triangle inequality
and at least linear growth. Our result is tight for the entrywise l_1 loss. We give
a similar improvement for other entrywise l_p losses, improving the
previously best known distortion. Our results come from a
technique which replaces the use of a well-conditioned basis with a slightly
larger spanning set for which any vector can be expressed as a linear
combination with small Euclidean norm. We show that this technique also gives
the first oblivious l_p subspace embeddings for 1 < p < 2 with nearly optimal distortion, closing a long line of work.
In the online setting, we give the first online subset selection algorithm
for l_p subspace approximation and entrywise l_p low rank
approximation by implementing sensitivity sampling online, which is challenging
due to the sequential nature of sensitivity sampling. Our main technique is an
online algorithm for detecting when an approximately optimal subspace changes
substantially.
Comment: To appear in STOC 2023; abstract shortened.
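To illustrate the flavor of running sensitivity sampling online, here is a toy rule: keep each arriving column with probability proportional to how poorly it is explained by the columns kept so far. This residual-based proxy is only a stand-in for the paper's online sensitivity sampling and change-detection machinery; the parameters are illustrative.

```python
import numpy as np

def online_column_selection(columns, oversample=5.0, seed=0):
    """Toy online column subset selection via residual-based sampling.

    Each arriving column is kept with probability proportional to the squared
    norm of its residual after projecting onto the span of the columns kept so
    far (capped at 1). Columns that are poorly explained by the current subset
    are more "sensitive" and hence more likely to be retained.
    """
    rng = np.random.default_rng(seed)
    kept_cols, kept_idx = [], []
    for i, c in enumerate(columns):
        if kept_cols:
            B = np.column_stack(kept_cols)
            coeffs, *_ = np.linalg.lstsq(B, c, rcond=None)
            residual = c - B @ coeffs
        else:
            residual = c
        # Keep with probability proportional to the unexplained fraction of c.
        p = min(1.0, oversample * (residual @ residual) / (c @ c + 1e-12))
        if rng.random() < p:
            kept_cols.append(c)
            kept_idx.append(i)
    return kept_idx

# Example: 300 columns drawn from a noisy rank-5 subspace; only a few survive.
rng = np.random.default_rng(3)
U = rng.standard_normal((40, 5))
cols = [U @ rng.standard_normal(5) + 0.01 * rng.standard_normal(40)
        for _ in range(300)]
print("kept columns:", online_column_selection(cols))
```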
Sketching Algorithms for Sparse Dictionary Learning: PTAS and Turnstile Streaming
Sketching algorithms have recently proven to be a powerful approach both for
designing low-space streaming algorithms as well as fast polynomial time
approximation schemes (PTAS). In this work, we develop new techniques to extend
the applicability of sketching-based approaches to the sparse dictionary
learning and the Euclidean k-means clustering problems. In particular, we
initiate the study of the challenging setting where the dictionary/clustering
assignment for each of the input points must be output, which has
surprisingly received little attention in prior work. On the fast algorithms
front, we obtain a new approach for designing PTAS's for the k-means
clustering problem, which generalizes to the first PTAS for the sparse
dictionary learning problem. On the streaming algorithms front, we obtain new
upper bounds and lower bounds for dictionary learning and k-means clustering.
In particular, given a design matrix arriving in a
turnstile stream, we show space upper bounds for sparse dictionary learning and
for k-means clustering, as well as an additional space upper bound for k-means
clustering on random-order row insertion streams under a natural "bounded
sensitivity" assumption. On the lower bounds side, we obtain a general space
lower bound for k-means clustering, as well as a lower bound for algorithms
which can estimate the cost of a single fixed set of candidate centers.
Comment: To appear in NeurIPS 2023.
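As a minimal illustration of the sketch-and-solve paradigm behind these results (for k-means only, not the turnstile-stream algorithms themselves), the sketch below projects the points to a lower dimension with a Gaussian sketch matrix, clusters in the sketched space, and evaluates the induced assignment in the original space. The target dimension and the small Lloyd's solver are illustrative choices.

```python
import numpy as np

def lloyd_kmeans(points, k, iters=25, seed=0):
    """A small Lloyd's k-means solver, used only for illustration."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute the centers.
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = points[assign == j].mean(axis=0)
    return assign

def kmeans_cost(points, assign, k):
    """Sum of squared distances to the mean of each induced cluster."""
    return sum(((points[assign == j] - points[assign == j].mean(axis=0)) ** 2).sum()
               for j in range(k) if (assign == j).any())

# Sketch-and-solve: cluster in a randomly projected space, then evaluate the
# resulting assignment on the original points.
rng = np.random.default_rng(4)
n, d, k, m = 2000, 100, 5, 20                  # m is a heuristic sketch dimension
X = rng.standard_normal((n, d)) + 5 * rng.standard_normal((k, d))[rng.integers(k, size=n)]
S = rng.standard_normal((d, m)) / np.sqrt(m)   # Gaussian sketch matrix
assign_sketch = lloyd_kmeans(X @ S, k)
assign_full = lloyd_kmeans(X, k)
print("k-means cost via sketch:  ", kmeans_cost(X, assign_sketch, k))
print("k-means cost in full dim: ", kmeans_cost(X, assign_full, k))
```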